7 research outputs found

    3D objects and scenes classification, recognition, segmentation, and reconstruction using 3D point cloud data: A review

    Three-dimensional (3D) point cloud analysis has become one of the most attractive subjects in realistic imaging and machine vision due to its simplicity, flexibility and powerful capacity for visualization. Indeed, representing scenes and buildings using 3D shapes and formats has enabled many applications, including autonomous driving and scene and object reconstruction. Nevertheless, working with this emerging type of data remains challenging for object representation, scene recognition, segmentation, and reconstruction. In this regard, significant effort has recently been devoted to developing novel strategies using different techniques, notably deep learning models. To that end, we present in this paper a comprehensive review of existing work on 3D point clouds: a well-defined taxonomy of existing techniques is built based on the nature of the adopted algorithms, application scenarios, and main objectives. Various tasks performed on 3D point cloud data are investigated, including object and scene detection, recognition, segmentation and reconstruction. In addition, we introduce a list of commonly used datasets, discuss the respective evaluation metrics, and compare the performance of existing solutions to characterize the state of the art and identify its limitations and strengths. Lastly, we elaborate on current challenges facing this technology and on future trends attracting considerable interest, which could serve as a starting point for upcoming research studies.
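As a minimal illustration of the kind of preprocessing such point cloud pipelines commonly rely on (this sketch is generic and not taken from the paper), here is voxel-grid downsampling: every point falling in the same cubic voxel is replaced by the voxel centroid, reducing cloud size before classification or segmentation. The function name and toy cloud are illustrative only.

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Replace all points falling in the same cubic voxel by their centroid.

    points: iterable of (x, y, z) tuples; voxel_size: voxel edge length.
    """
    buckets = defaultdict(list)
    for p in points:
        # Integer voxel coordinates identify the bucket a point falls into.
        key = tuple(int(c // voxel_size) for c in p)
        buckets[key].append(p)
    # One centroid per non-empty voxel.
    return [
        tuple(sum(c) / len(pts) for c in zip(*pts))
        for pts in buckets.values()
    ]

# Two nearby points collapse into one centroid; the distant point survives alone.
cloud = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.05), (2.0, 2.0, 2.0)]
reduced = voxel_downsample(cloud, voxel_size=1.0)
```

Real pipelines (e.g. those surveyed in the paper) typically perform this step with optimized libraries, but the bucketing idea is the same.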

    Access point backhaul capacity aggregation as a matching game in the context of wireless local area networks

    No full text
    We propose, in this paper, a novel approach to Access Point (AP) backhaul capacity aggregation in the context of Wireless Local Area Networks (WLANs). Our purpose is to optimize the Available Backhaul Resource (ABR) allocation between APs belonging to different Basic Service Sets (BSSs), so as to relieve the pressure of traffic offloaded from non-WiFi networks. In this context, we define two different sets of APs within the same WLAN: those with spare backhaul resources (Providers) and those lacking them (Beneficiaries). Considering the two-sided nature of this system, we model our assignment problem using matching games: one-to-one, many-to-one and many-to-many, and solve it using the so-called Deferred Acceptance (DA) algorithm, leading to an optimal pairwise stable matching. Our results show that, compared to a legacy WLAN, the overall performance is substantially improved in terms of AP throughput gain, AP satisfaction percentage and load balancing between different APs.
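The deferred acceptance algorithm the abstract refers to can be sketched in its simplest one-to-one form (Gale-Shapley): providers propose to beneficiaries in order of preference, and each beneficiary holds on to the best offer received so far. The preference lists below are invented for illustration; the paper's actual utility functions for ranking APs are not reproduced here.

```python
def deferred_acceptance(prov_prefs, benef_prefs):
    """One-to-one deferred acceptance: providers propose, beneficiaries
    keep their best offer so far. Returns provider -> beneficiary pairs."""
    # Lower rank value = more preferred by the beneficiary.
    rank = {b: {p: i for i, p in enumerate(prefs)}
            for b, prefs in benef_prefs.items()}
    free = list(prov_prefs)                 # providers still unmatched
    next_idx = {p: 0 for p in prov_prefs}   # next beneficiary to propose to
    engaged = {}                            # beneficiary -> held provider
    while free:
        p = free.pop()
        if next_idx[p] >= len(prov_prefs[p]):
            continue                        # p exhausted its list, stays unmatched
        b = prov_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if b not in engaged:
            engaged[b] = p                  # first offer is always held
        elif rank[b][p] < rank[b][engaged[b]]:
            free.append(engaged[b])         # better offer: release the old one
            engaged[b] = p
        else:
            free.append(p)                  # rejected, will propose further down
    return {p: b for b, p in engaged.items()}

# Hypothetical APs: each side ranks the other.
prov = {"AP1": ["B1", "B2"], "AP2": ["B1", "B2"]}
benef = {"B1": ["AP2", "AP1"], "B2": ["AP1", "AP2"]}
match = deferred_acceptance(prov, benef)
```

The resulting matching is pairwise stable: no provider-beneficiary pair would both prefer each other over their assigned partners, which is the property the abstract's "optimal pairwise stable matching" claim rests on.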

    Access Point Backhaul Resource Aggregation as a Many-to-One Matching Game in Wireless Local Area Networks

    No full text
    This paper studies backhaul bandwidth aggregation in the context of a wireless local area network composed of two different types of access points: those with spare backhaul capacity (which we term providers) and those in shortage of it (beneficiaries); the aim is to transfer excess capacity from providers to beneficiaries. We model the system as a matching game in a many-to-one setting, wherein several providers can be matched to one beneficiary, and adopt the so-called deferred acceptance algorithm to reach an optimal and stable solution. We consider two flavors, when the beneficiaries are limited in their resource demands and when they are not, and two scenarios, when resources are abundant and when they are scarce. Our results show that the many-to-one setting outperforms the one-to-one case in terms of overall throughput gain, resource usage, and individual beneficiary satisfaction by up to 50%, whether resources are scarce or abundant. As for the limited versus non-limited case, the former ensures fairer sharing of spectral resources and a higher satisfaction percentage among beneficiaries.
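In the many-to-one setting described here, deferred acceptance generalizes in the hospital/residents style: each beneficiary holds up to a quota of provider offers and rejects the worst-ranked one whenever it exceeds capacity. The quotas and preference lists below are illustrative assumptions, not values from the paper.

```python
def many_to_one_da(prov_prefs, benef_prefs, quota):
    """Many-to-one deferred acceptance: providers propose; each beneficiary
    keeps its `quota[b]` best offers, rejecting the worst when over capacity."""
    rank = {b: {p: i for i, p in enumerate(prefs)}
            for b, prefs in benef_prefs.items()}
    free = list(prov_prefs)
    nxt = {p: 0 for p in prov_prefs}
    held = {b: [] for b in benef_prefs}     # beneficiary -> accepted providers
    while free:
        p = free.pop()
        if nxt[p] >= len(prov_prefs[p]):
            continue                        # p exhausted its list
        b = prov_prefs[p][nxt[p]]
        nxt[p] += 1
        held[b].append(p)
        held[b].sort(key=lambda q: rank[b][q])   # best-ranked first
        if len(held[b]) > quota[b]:
            free.append(held[b].pop())      # reject the worst-ranked offer
    return held

# Three hypothetical providers all prefer beneficiary B1; B1 can host two.
prov = {"P1": ["B1", "B2"], "P2": ["B1", "B2"], "P3": ["B1", "B2"]}
benef = {"B1": ["P1", "P2", "P3"], "B2": ["P1", "P2", "P3"]}
quota = {"B1": 2, "B2": 2}
assignment = many_to_one_da(prov, benef, quota)
```

The quota is what distinguishes this from the one-to-one game: rejected providers cascade down their lists until every beneficiary holds at most its capacity.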

    Proposal for access point backhaul resource aggregation and its modeling using one-to-one matching game

    No full text
    In this paper, we propose a new scheme for aggregating access point backhaul bandwidth resources in wireless local area networks. We consider a setting with several access points, some with excess backhaul capacity and others in shortage of it, and propose a symmetric approach wherein the distribution of resources is managed between providing and beneficiary access points, without involving the direct participation of the users. We then model the proposal using one-to-one matching game theory and make use of the so-called deferred acceptance (DA) algorithm to reach an optimal, stable solution for the problem. In comparison with a random distribution of available backhaul resources, the DA scheme offers significantly better performance in terms of overall throughput, with gains reaching up to 40% when compared to random matching.

    Important complexity reduction of random forest in multi-classification problem

    No full text
    Algorithm complexity in machine learning problems has been a real concern, especially in large-scale systems. With increasing data dimensionality, a particular emphasis is placed on designing computationally efficient learning models. In this paper, we propose an approach to reduce the complexity of a multi-classification learning problem in cloud networks. Based on the Random Forest algorithm and the highly dimensional UNSW-NB15 dataset, a tuning of the algorithm is first performed to reduce the number of grown trees used during classification. Then, we apply importance-based feature selection to optimize the number of predictors involved in the learning process. All of these optimizations, implemented with respect to the best performance recorded by our classifier, yield substantial improvements in computational complexity during both the training and prediction phases. © 2019 IEEE. This publication was made possible by NPRP grant 8-634-1-131 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
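The idea behind importance-based feature selection can be shown with a self-contained toy: score each feature by how well a single-threshold classifier (a decision stump) on that feature alone predicts the labels, then keep only the top-k features. This stump accuracy is a crude stand-in for the impurity-based importances a trained random forest actually reports; the data and function names are illustrative, not from the paper.

```python
def stump_accuracy(xs, ys):
    """Best single-threshold accuracy achievable on one feature.
    A crude proxy for a random forest's per-feature importance score."""
    best = 0.0
    for t in sorted(set(xs)):
        for sign in (1, -1):            # try both threshold directions
            preds = [1 if sign * (x - t) >= 0 else 0 for x in xs]
            acc = sum(p == y for p, y in zip(preds, ys)) / len(ys)
            best = max(best, acc)
    return best

def select_top_features(X, y, k):
    """Rank features by stump accuracy and keep the k most informative."""
    n_feat = len(X[0])
    scores = [stump_accuracy([row[j] for row in X], y) for j in range(n_feat)]
    ranked = sorted(range(n_feat), key=lambda j: scores[j], reverse=True)
    return ranked[:k]

# Feature 0 separates the classes perfectly; feature 1 is constant noise.
X = [[0, 5], [1, 5], [2, 5], [3, 5]]
y = [0, 0, 1, 1]
kept = select_top_features(X, y, k=1)
```

Dropping low-importance predictors this way shrinks both the training cost and the per-sample prediction cost, which is the complexity gain the abstract refers to.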

    Data Augmentation for Intrusion Detection and Classification in Cloud Networks

    No full text
    Cloud computing is a paradigm that provides multiple services over the internet with high flexibility in a cost-effective way. However, the growth of cloud-based services comes with major security issues. Recently, machine learning techniques have been gaining much interest in security applications as they exhibit fast processing capabilities with real-time predictions. One major challenge in the implementation of these techniques is the training data available for each new potential attack category. In this paper, we propose a new machine-learning-based model for securing networks. The proposed model ensures better learning of minority classes using a Generative Adversarial Network (GAN) architecture. In particular, the new model optimizes the GAN parameters, including the number of inner learning steps for the discriminator, to balance the training datasets. The optimized GAN then generates highly informative, "like-real" instances to be appended to the original data, which improves the detection of classes with relatively little training data. Our experimental results show that the proposed approach enhances the overall classification performance and detection accuracy, even for the rarely detectable classes, on both the UNSW and NSL-KDD datasets. The simulation results also show that the proposed model detects network attacks better than state-of-the-art techniques. © 2021 IEEE. This work was supported by Qatar University Internal Grant IRCC-2020-001. The statements made herein are solely the responsibility of the authors.
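The augmentation step described here, appending synthetic minority-class rows until the classes are balanced, can be sketched without the GAN itself. Below, a per-feature Gaussian sampler stands in for the trained GAN generator purely so the sketch runs end to end; in the paper, the generator is the adversarially trained network, not a Gaussian fit. All names and data are illustrative.

```python
import random
from collections import Counter

def gaussian_stand_in(rows):
    """Fit a per-feature Gaussian to the minority rows.
    A stand-in for the trained GAN generator, NOT the paper's method."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [(sum((v - m) ** 2 for v in c) / len(c)) ** 0.5
            for c, m in zip(cols, means)]
    def sample():
        return [random.gauss(m, s) for m, s in zip(means, stds)]
    return sample

def balance_dataset(X, y, minority_label, generator=None):
    """Append synthetic minority-class rows until class counts are equal."""
    counts = Counter(y)
    deficit = max(counts.values()) - counts[minority_label]
    minority_rows = [row for row, lab in zip(X, y) if lab == minority_label]
    gen = generator or gaussian_stand_in(minority_rows)
    X_aug = X + [gen() for _ in range(deficit)]
    y_aug = y + [minority_label] * deficit
    return X_aug, y_aug

# Four majority-class rows, one minority row: generate three synthetic rows.
X = [[0.0, 1.0], [0.1, 1.1], [0.2, 0.9], [0.3, 1.0], [5.0, 5.0]]
y = [0, 0, 0, 0, 1]
X_bal, y_bal = balance_dataset(X, y, minority_label=1)
```

Any generator with the same zero-argument `sample` interface, including a trained GAN, can be passed in via the `generator` argument, which is the design choice that keeps the balancing step independent of how the synthetic instances are produced.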